
    Simulator based performance metrics to estimate reliability of control room operators

    Chemical processes rely on several layers of protection to prevent accidents. One of the most important layers of protection is the human operator. Human error is a key contributor to a majority of accidents today. Estimating human failure probabilities is challenging because of the numerous drivers of human error, and it still depends heavily on expert judgment. In this paper, we propose a strategy to estimate the reliability of control room operators by measuring their control performance on a process simulator. The performance of the operator is translated into two metrics - margin-of-failure and available-time to respond to process events - which can be calculated from process operations data generated in training-simulator-based studies. These metrics offer a qualitative estimate of operators' reliability. We conducted a set of experiments involving 128 students of differing capabilities from two institutions, who were tasked with controlling a simulated ethanol production plant. Our results demonstrate that the metrics clearly distinguish the performance of expert and novice student operators.
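    The two metrics lend themselves to a direct computation from logged simulator trajectories. The sketch below is a minimal illustration, assuming a hypothetical process variable with an alarm limit and a trip (failure) limit; the function names, limit values and exact metric definitions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def margin_of_failure(pv, setpoint, trip_limit):
    """Fraction of the setpoint-to-trip range that was never consumed.

    1.0 means the variable stayed at the setpoint; 0.0 means it reached the
    trip (failure) limit at some point during the run. Assumes an upward
    excursion toward a high trip limit.
    """
    worst = np.max(pv)                                  # worst excursion toward the trip limit
    used = (worst - setpoint) / (trip_limit - setpoint)
    return float(np.clip(1.0 - used, 0.0, 1.0))

def available_time(t, pv, alarm_limit, trip_limit):
    """Seconds between the first alarm crossing and the first trip crossing.

    Returns inf if the trip limit was never reached (effectively unlimited
    time to respond) and nan if no alarm occurred at all.
    """
    if not np.any(pv >= alarm_limit):
        return float("nan")
    alarm_idx = int(np.argmax(pv >= alarm_limit))       # first sample above the alarm limit
    if not np.any(pv >= trip_limit):
        return float("inf")
    trip_idx = int(np.argmax(pv >= trip_limit))         # first sample above the trip limit
    return float(t[trip_idx] - t[alarm_idx])

# Illustrative run: a temperature excursion that alarms at 380 K and trips at 400 K.
t = np.linspace(0, 600, 601)                            # 10 minutes, 1 s sampling
pv = 350 + 0.1 * t - 0.00008 * t**2                     # synthetic trajectory
print(margin_of_failure(pv, setpoint=350, trip_limit=400))
print(available_time(t, pv, alarm_limit=380, trip_limit=400))
```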

    Microbial conversion of syngas to ethanol


    Reconstruction and analysis of a genome-scale metabolic model for Scheffersomyces stipitis

    Background: Fermentation of xylose, the major component of hemicellulose, is essential for economic conversion of lignocellulosic biomass to fuels and chemicals. The yeast Scheffersomyces stipitis (formerly known as Pichia stipitis) has the highest known native capacity for xylose fermentation and possesses several genes for lignocellulose bioconversion in its genome. Understanding the metabolism of this yeast at a global scale, by reconstructing its genome-scale metabolic model, is essential for manipulating its metabolic capabilities and for successful transfer of its capabilities to other industrial microbes. Results: We present a genome-scale metabolic model for Scheffersomyces stipitis, a native xylose-utilizing yeast. The model was reconstructed based on genome sequence annotation, detailed experimental investigation and known yeast physiology. The macromolecular composition of Scheffersomyces stipitis biomass was estimated experimentally, and its ability to grow on different carbon, nitrogen, sulphur and phosphorus sources was determined by phenotype microarrays. The compartmentalized model, developed through an iterative procedure, accounted for 814 genes, 1371 reactions, and 971 metabolites. In silico computed growth rates were compared with high-throughput phenotyping data, and the model predicted the qualitative outcome for 74% of the substrates investigated. Model simulations were used to identify the biosynthetic requirements for anaerobic growth of Scheffersomyces stipitis on glucose, and the results were validated against published literature. Bottlenecks in the Scheffersomyces stipitis metabolic network for xylose uptake and nucleotide cofactor recycling were identified by in silico flux variability analysis. The scope of the model in enhancing mechanistic understanding of microbial metabolism is demonstrated by identifying a mechanism for mitochondrial respiration and oxidative phosphorylation. Conclusion: The genome-scale metabolic model developed for Scheffersomyces stipitis successfully predicted substrate utilization and anaerobic growth requirements. Useful insights were drawn from model simulations on xylose metabolism, cofactor recycling and the mechanism of mitochondrial respiration. These insights can be applied for efficient xylose utilization and cofactor recycling in other industrial microorganisms. The developed model forms a basis for rational analysis and design of the Scheffersomyces stipitis metabolic network for the production of fuels and chemicals from lignocellulosic biomass.
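    In silico growth-rate computation in such a genome-scale model is, at its core, flux balance analysis: a linear program that maximizes the flux through the biomass reaction subject to steady-state mass balances S·v = 0 and flux bounds. The sketch below shows the idea on a toy three-reaction network; the stoichiometry and bounds are illustrative assumptions, not taken from the S. stipitis model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A, A -> B, B -> biomass.
# Rows = metabolites (A, B), columns = reactions (v_uptake, v_conv, v_biomass).
S = np.array([
    [1, -1,  0],    # metabolite A balance
    [0,  1, -1],    # metabolite B balance
])
lb = [0, 0, 0]          # irreversible reactions
ub = [10, 1000, 1000]   # uptake capped at 10 mmol/gDW/h

# FBA: maximize biomass flux (linprog minimizes, so negate the objective).
c = [0, 0, -1]
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
              bounds=list(zip(lb, ub)), method="highs")
print("growth rate (biomass flux):", -res.fun)   # 10.0 here, limited by uptake
```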

    Approximation Algorithms for Covering/Packing Integer Programs

    Given matrices A and B and vectors a, b, c and d, all with non-negative entries, we consider the problem of computing min {c·x : x in Z^n_+, Ax ≥ a, Bx ≤ b, x ≤ d}. We give a bicriteria approximation algorithm that, given epsilon in (0, 1], finds a solution of cost O(ln(m)/epsilon^2) times optimal, meeting the covering constraints (Ax ≥ a) and multiplicity constraints (x ≤ d), and satisfying Bx ≤ (1 + epsilon)b + beta, where beta is the vector of row sums beta_i = sum_j B_ij. Here m denotes the number of rows of A. This gives an O(ln m)-approximation algorithm for CIP -- minimum-cost covering integer programs with multiplicity constraints, i.e., the special case where there are no packing constraints Bx ≤ b. The previous best approximation ratio had been O(ln(max_j sum_i A_ij)) since 1982. CIP contains the set cover problem as a special case, so an O(ln m)-approximation is the best possible unless P = NP. Comment: A preliminary version appeared in the IEEE Symposium on Foundations of Computer Science (2001); to appear in the Journal of Computer and System Sciences.
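    The LP relaxation that such approximation algorithms work from is straightforward to set up. The sketch below solves it for a tiny, made-up instance using scipy; the rounding and analysis that achieve the stated O(ln(m)/epsilon^2) guarantee are the paper's contribution and are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny covering/packing instance: min c.x  s.t.  Ax >= a,  Bx <= b,  0 <= x <= d.
c = np.array([1.0, 2.0, 1.5])
A = np.array([[1, 1, 0],
              [0, 1, 1]])          # covering constraints
a = np.array([2, 3])
B = np.array([[1, 0, 2]])          # packing constraint
b = np.array([4])
d = np.array([3, 3, 3])            # multiplicity bounds

# linprog expects <= rows, so the covering rows are negated: -Ax <= -a.
A_ub = np.vstack([-A, B])
b_ub = np.concatenate([-a, b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=list(zip(np.zeros(3), d)), method="highs")
print("fractional optimum:", res.x, "cost:", res.fun)
```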

    Computing fuzzy rough approximations in large scale information systems

    Rough set theory is a popular and powerful machine learning tool. It is especially suitable for dealing with information systems that exhibit inconsistencies, i.e. objects that have the same values for the conditional attributes but a different value for the decision attribute. In line with the emerging granular computing paradigm, rough set theory groups objects together based on the indiscernibility of their attribute values. Fuzzy rough set theory extends rough set theory to data with continuous attributes and detects degrees of inconsistency in the data. Key to this is turning the indiscernibility relation into a gradual relation, acknowledging that objects can be similar to a certain extent. In very large datasets with millions of objects, computing the gradual indiscernibility relation (in other words, the soft granules) is very demanding, both in runtime and in memory. It is, however, required for the computation of the lower and upper approximations of concepts in the fuzzy rough set analysis pipeline. Current non-distributed implementations in R are limited by memory capacity. For example, we found that a state-of-the-art non-distributed implementation in R could not handle 30,000 rows and 10 attributes on a node with 62 GB of memory. This is clearly insufficient to scale fuzzy rough set analysis to massive datasets. In this paper we present a parallel and distributed solution based on the Message Passing Interface (MPI) to compute fuzzy rough approximations in very large information systems. Our results show that our parallel approach scales with problem size to information systems with millions of objects. To the best of our knowledge, no other parallel and distributed solutions have been proposed in the literature for this problem.
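    For context, the approximations in question follow the standard fuzzy rough definitions: for a fuzzy concept A and a gradual indiscernibility relation R, lower(x) = inf_y I(R(x,y), A(y)) and upper(x) = sup_y T(R(x,y), A(y)) for an implicator I and t-norm T. The sketch below computes them densely with NumPy on a toy dataset, using the Lukasiewicz connectives and an assumed similarity relation; the paper's contribution is distributing exactly this O(n^2) computation with MPI, which is not reproduced here.

```python
import numpy as np

def similarity(X):
    """Gradual indiscernibility: 1 - mean per-attribute distance, for attributes
    scaled to [0,1]. This particular relation is an illustrative choice, not
    necessarily the one used in the paper."""
    diffs = np.abs(X[:, None, :] - X[None, :, :])   # n x n x m attribute differences
    return 1.0 - diffs.mean(axis=2)                 # n x n similarity matrix

def lower_upper(R, A):
    """Fuzzy rough lower/upper approximations of a fuzzy concept A (memberships
    in [0,1]), using the Lukasiewicz implicator and t-norm."""
    I = np.minimum(1.0, 1.0 - R + A[None, :])       # implicator I(R(x,y), A(y))
    T = np.maximum(0.0, R + A[None, :] - 1.0)       # t-norm     T(R(x,y), A(y))
    return I.min(axis=1), T.max(axis=1)             # inf over y, sup over y

# Small example: 5 objects, 3 attributes scaled to [0,1], crisp concept {0, 1, 3}.
rng = np.random.default_rng(0)
X = rng.random((5, 3))
A = np.array([1.0, 1.0, 0.0, 1.0, 0.0])
low, up = lower_upper(similarity(X), A)
print(low, up)
```

    The memory bottleneck addressed in the paper is the n x n relation matrix R; a distributed implementation would partition its rows across workers instead of materializing it on a single node.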

    In silico analysis for the production of higher carbon alcohols using Saccharomyces cerevisiae

    Technology for the production of alternative fuels is receiving increased attention owing to concerns over global energy and environmental problems. Using higher carbon alcohols as gasoline substitutes has several advantages over ethanol, the first-generation biofuel. Higher carbon alcohols also have other applications as flavor/aroma compounds and as building blocks for several other products. Two different pathways for the production of higher carbon alcohols have recently been reported. This work evaluates these pathways and identifies metabolic bottlenecks for higher carbon alcohol production in Saccharomyces cerevisiae. Quantitative characterization of the metabolic pathways of Saccharomyces cerevisiae is essential for understanding the metabolic behavior of the microorganism. Several mathematical modeling frameworks have been developed to describe and analyze the metabolic behavior of an organism. Stoichiometric modeling is one such approach; it relies on mass balances over intracellular metabolites and the assumption of pseudo-steady-state conditions to determine intracellular metabolic fluxes. The development of stoichiometric (metabolic) models and the analysis of intracellular metabolic fluxes have several applications in metabolic engineering and strain improvement. The production of higher carbon alcohols (such as 1-butanol, isobutanol and isopropanol) was analyzed by introducing the corresponding pathways into the genome-scale metabolic model of Saccharomyces cerevisiae. The yield of higher carbon alcohols obtained from the fermentative and non-fermentative pathways was calculated and compared with the maximum theoretical yield. The effect of different industrially relevant carbon sources on the production of higher carbon alcohols was also analyzed. Constraint-based analysis was carried out on the genome-scale metabolic model to obtain the intracellular metabolic flux distribution during the production of these alcohols. Detailed analysis of the metabolic flux distribution was carried out based on the shadow prices and reduced costs obtained from metabolic flux analysis. The metabolic bottlenecks for the production of higher carbon alcohols and the rate-limiting steps in the metabolism were identified based on these analyses. Strategies for enhancing the yield of higher carbon alcohols will be proposed based on these analyses.
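    The shadow prices and reduced costs used in this kind of analysis are the dual values of the flux-balance linear program: the marginal change in the objective per unit relaxation of a metabolite balance, and the sensitivity of the objective to the bounds on individual fluxes. The sketch below illustrates a yield calculation and reads these duals for a toy network; the stoichiometry is an assumption for illustration, and the dual-attribute names follow scipy's HiGHS interface, which may differ between scipy versions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network for a yield calculation: glucose uptake -> precursor -> alcohol,
# with a competing drain on the precursor (e.g. toward biomass).
# Rows = metabolites (glucose, precursor); columns = reactions
# (v_uptake, v_to_precursor, v_to_alcohol, v_drain).
S = np.array([
    [1, -1,  0,  0],    # glucose balance
    [0,  1, -1, -1],    # precursor balance
])
bounds = [(0, 10), (0, 1000), (0, 1000), (1, 1000)]   # uptake <= 10, drain forced >= 1

# Maximize alcohol flux (index 2); linprog minimizes, so negate the objective.
c = np.array([0, 0, -1, 0])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")

v_alcohol, v_uptake = res.x[2], res.x[0]
print("yield (mol alcohol / mol glucose):", v_alcohol / v_uptake)   # 0.9 here

# Shadow prices: change in the optimal objective per unit of metabolite made
# available, read from the duals of the mass-balance constraints.
print("shadow prices:", res.eqlin.marginals)
# Reduced costs on the flux bounds show which bound (here the uptake limit and
# the forced drain) is limiting production.
print("reduced costs (lower/upper bounds):", res.lower.marginals, res.upper.marginals)
```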